
Conversation

@renovate-bot
Contributor

This PR contains the following updates:

Package: langchain-core (source, changelog)
Change: ==0.2.33 -> ==0.3.80

Warning

Some dependencies could not be looked up. Check the Dependency Dashboard for more information.

GitHub Vulnerability Alerts

GHSA-6qv9-48xg-fc7f

Context

A template injection vulnerability exists in LangChain's prompt template system that allows attackers to access Python object internals through template syntax. This vulnerability affects applications that accept untrusted template strings (not just template variables) in ChatPromptTemplate and related prompt template classes.

Templates allow attribute access (.) and indexing ([]) but not method invocation (()).

The combination of attribute access and indexing may enable exploitation depending on which objects are passed to templates. When template variables are simple strings (the common case), the impact is limited. However, when using MessagesPlaceholder with chat message objects, attackers can traverse through object attributes and dictionary lookups (e.g., __globals__) to reach sensitive data such as environment variables.

The vulnerability specifically requires that applications accept template strings (the structure) from untrusted sources, not just template variables (the data). Most applications either do not use templates at all or use hardcoded templates, and are therefore not vulnerable.
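
To make the risk concrete, here is a plain-Python sketch (no LangChain involved; `Carrier` and its fields are hypothetical stand-ins) of why format-style replacement fields are dangerous when the template string itself is attacker-controlled:

```python
# str.format (the engine behind f-string templates) permits attribute
# access and dict/list indexing inside replacement fields, though not calls.
class Carrier:
    secret = {"API_KEY": "not-a-real-key"}

# Attribute traversal: reaches the object's class name.
print("{c.__class__.__name__}".format(c=Carrier()))  # Carrier
# Indexing: in the format mini-language, an unquoted key is a string key.
print("{c.secret[API_KEY]}".format(c=Carrier()))     # not-a-real-key
```

Nothing here is LangChain-specific; it is the underlying `str.format` behavior that the fix below guards against.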

Affected Components

  • langchain-core package
  • Template formats:
    • F-string templates (template_format="f-string") - Vulnerability fixed
    • Mustache templates (template_format="mustache") - Defensive hardening
    • Jinja2 templates (template_format="jinja2") - Defensive hardening

Impact

Attackers who can control template strings (not just template variables) can:

  • Access Python object attributes and internal properties via attribute traversal
  • Extract sensitive information from object internals (e.g., __class__, __globals__)
  • Potentially escalate to more severe attacks depending on the objects passed to templates

Attack Vectors

1. F-string Template Injection

Before Fix:

from langchain_core.prompts import ChatPromptTemplate

malicious_template = ChatPromptTemplate.from_messages(
    [("human", "{msg.__class__.__name__}")],
    template_format="f-string"
)

# Note that this requires passing a placeholder variable for "msg.__class__.__name__".
result = malicious_template.invoke({"msg": "foo", "msg.__class__.__name__": "safe_placeholder"})

# Previously returned:
# >>> result.messages[0].content
# 'str'

2. Mustache Template Injection

Before Fix:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage

msg = HumanMessage("Hello")

# Attacker controls the template string
malicious_template = ChatPromptTemplate.from_messages(
    [("human", "{{question.__class__.__name__}}")],
    template_format="mustache"
)

result = malicious_template.invoke({"question": msg})

# Previously returned: "HumanMessage" (getattr() exposed internals)

3. Jinja2 Template Injection

Before Fix:

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.messages import HumanMessage

msg = HumanMessage("Hello")

# Attacker controls the template string
malicious_template = ChatPromptTemplate.from_messages(
    [("human", "{{ question.content }}")],
    template_format="jinja2"
)

result = malicious_template.invoke({"question": msg})

# Could access non-dunder attributes/methods on objects

Root Cause

  1. F-string templates: The implementation used Python's string.Formatter().parse() to extract variable names from template strings. This method returns the complete field expression, including attribute access syntax:
    from string import Formatter
    
    template = "{msg.__class__} and {x}"
    print([var_name for (_, var_name, _, _) in Formatter().parse(template)])
    # Prints: ['msg.__class__', 'x']
    The extracted names were not validated to ensure they were simple identifiers. As a result, template strings containing attribute traversal and indexing expressions (e.g., {obj.__class__.__name__} or {obj.method.__globals__[os]}) were accepted and subsequently evaluated during formatting. While f-string templates do not support method calls with (), they do support [] indexing, which could allow traversal through dictionaries like __globals__ to reach sensitive objects.
  2. Mustache templates: By design, these used getattr() as a fallback to support accessing attributes on objects (e.g., {{user.name}} on a User object). As defensive hardening, we decided to restrict traversal to dict, list, and tuple types, since untrusted templates could exploit attribute access to reach internal properties like __class__ on arbitrary objects.
  3. Jinja2 templates: Jinja2's default SandboxedEnvironment blocks dunder attributes (e.g., __class__) but permits access to other attributes and methods on objects. While Jinja2 templates in LangChain are typically used with trusted template strings, as a defense-in-depth measure we've restricted the environment to block all attribute and method access on objects passed to templates.
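
The f-string fix amounts to validating the names that Formatter().parse() extracts. A short sketch of that kind of check (the function name extract_and_validate is hypothetical, not LangChain's actual code):

```python
from string import Formatter

def extract_and_validate(template: str) -> list[str]:
    """Extract replacement-field names and reject anything that is not a
    simple Python identifier (so {obj.attr}, {obj[0]}, {obj.__class__} fail)."""
    names = [name for _, name, _, _ in Formatter().parse(template) if name]
    for name in names:
        if not name.isidentifier():
            raise ValueError(f"Invalid variable name: {name!r}")
    return names

print(extract_and_validate("{question} and {context}"))  # ['question', 'context']
# extract_and_validate("{msg.__class__}") raises ValueError
```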

Who Is Affected?

High Risk Scenarios

You are affected if your application:

  • Accepts template strings from untrusted sources (user input, external APIs, databases)
  • Dynamically constructs prompt templates based on user-provided patterns
  • Allows users to customize or create prompt templates

Example vulnerable code:

# User controls the template string itself
user_template_string = request.json.get("template")  # DANGEROUS

prompt = ChatPromptTemplate.from_messages(
    [("human", user_template_string)],
    template_format="mustache"
)

result = prompt.invoke({"data": sensitive_object})

Low/No Risk Scenarios

You are NOT affected if:

  • Template strings are hardcoded in your application code
  • Template strings come only from trusted, controlled sources
  • Users can only provide values for template variables, not the template structure itself

Example safe code:

# Template is hardcoded - users only control variables
prompt = ChatPromptTemplate.from_messages(
    [("human", "User question: {question}")],  # SAFE
    template_format="f-string"
)

# User input only fills the 'question' variable
result = prompt.invoke({"question": user_input})

The Fix

F-string Templates

F-string templates had a clear vulnerability where attribute access syntax was exploitable. We've added strict validation to prevent this:

  • Added validation to enforce that variable names must be valid Python identifiers
  • Rejects syntax like {obj.attr}, {obj[0]}, or {obj.__class__}
  • Only allows simple variable names: {variable_name}
# After fix - these are rejected at template creation time
ChatPromptTemplate.from_messages(
    [("human", "{msg.__class__}")],  # ValueError: Invalid variable name
    template_format="f-string"
)

Mustache Templates (Defensive Hardening)

As defensive hardening, we've restricted what Mustache templates support to reduce the attack surface:

  • Replaced getattr() fallback with strict type checking
  • Only allows traversal into dict, list, and tuple types
  • Blocks attribute access on arbitrary Python objects
# After hardening - attribute access returns empty string
prompt = ChatPromptTemplate.from_messages(
    [("human", "{{msg.__class__.__name__}}")],
    template_format="mustache"
)
result = prompt.invoke({"msg": HumanMessage("test")})

# Returns: "" (access blocked)
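
The hardened Mustache lookup behaves roughly like this sketch (names and exact semantics are assumptions for illustration, not LangChain's implementation):

```python
def safe_lookup(scope, dotted_name: str):
    """Resolve a dotted Mustache name, descending only into dict/list/tuple.
    Any step that would require getattr() on an arbitrary object yields ''."""
    current = scope
    for part in dotted_name.split("."):
        if isinstance(current, dict):
            current = current.get(part, "")
        elif isinstance(current, (list, tuple)) and part.isdigit():
            current = current[int(part)]
        else:
            return ""  # attribute access on other objects is blocked
    return current

print(safe_lookup({"user": {"name": "Ada"}}, "user.name"))  # Ada
print(safe_lookup({"msg": object()}, "msg.__class__"))      # '' (blocked)
```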

Jinja2 Templates (Defensive Hardening)

As defensive hardening, we've significantly restricted Jinja2 template capabilities:

  • Introduced _RestrictedSandboxedEnvironment that blocks ALL attribute/method access
  • Only allows simple variable lookups from the context dictionary
  • Raises SecurityError on any attribute access attempt
# After hardening - all attribute access is blocked
prompt = ChatPromptTemplate.from_messages(
    [("human", "{{ msg.content }}")],
    template_format="jinja2"
)

prompt.invoke({"msg": HumanMessage("test")})
# Raises SecurityError: Access to attributes is not allowed

Important Recommendation: Due to the expressiveness of Jinja2 and the difficulty of fully sandboxing it, we recommend reserving Jinja2 templates for trusted sources only. If you need to accept template strings from untrusted users, use f-string or mustache templates with the new restrictions instead.

While we've hardened the Jinja2 implementation, the nature of templating engines makes comprehensive sandboxing challenging. The safest approach is to only use Jinja2 templates when you control the template source.
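
In the same spirit, a restricted Jinja2 sandbox can be sketched as follows. RestrictedEnv is an illustrative stand-in, not LangChain's actual _RestrictedSandboxedEnvironment:

```python
from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment

class RestrictedEnv(SandboxedEnvironment):
    """Block all attribute and item access; plain variable lookups from the
    context dictionary still work."""
    def getattr(self, obj, attribute):
        raise SecurityError(f"access to attribute {attribute!r} is not allowed")

    def getitem(self, obj, argument):
        raise SecurityError("access to items is not allowed")

env = RestrictedEnv()
print(env.from_string("{{ name }}").render(name="Ada"))  # Ada
# env.from_string("{{ msg.content }}").render(msg=...) raises SecurityError
```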

Important Reminder: Many applications do not need prompt templates. Templates are useful for variable substitution and dynamic logic (if statements, loops, conditionals). However, if you're building a chatbot or conversational application, you can often work directly with message objects (e.g., HumanMessage, AIMessage, ToolMessage) without templates. Direct message construction avoids template-related security concerns entirely.

Remediation

Immediate Actions

  1. Audit your code for any locations where template strings come from untrusted sources
  2. Update to the patched version of langchain-core
  3. Review template usage to ensure separation between template structure and user data
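
For a pip-managed project, step 2 amounts to a one-line constraint change (the file layout is an assumption; the version floor comes from this PR's bump):

```text
# requirements.txt - require the patched release or later
langchain-core>=0.3.80
```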

Best Practices

  • Consider if you need templates at all - Many applications can work directly with message objects (HumanMessage, AIMessage, etc.) without templates
  • Reserve Jinja2 for trusted sources - Only use Jinja2 templates when you fully control the template content

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Never, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR was generated by Mend Renovate. View the repository job log.

@renovate-bot renovate-bot requested review from a team as code owners November 20, 2025 18:00
@trusted-contributions-gcf bot added the kokoro:force-run and owlbot:run labels Nov 20, 2025
@product-auto-label bot added the samples label Nov 20, 2025
@gemini-code-assist
Contributor

Summary of Changes

Hello @renovate-bot, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request updates the langchain-core dependency to version 0.3.80 to mitigate a significant template injection vulnerability. The vulnerability allowed unauthorized access to Python object internals through crafted template strings in LangChain's prompt system. The update introduces strict validation for F-string templates and defensive hardening for Mustache and Jinja2 templates, thereby enhancing the security posture of applications utilizing these components. Users are advised to review their template handling practices, especially concerning untrusted inputs, to ensure full protection.

Highlights

  • Dependency Update: The langchain-core dependency has been updated from v0.2.33 to v0.3.80 across multiple requirements.txt files.
  • Security Vulnerability Fix: This update addresses a critical template injection vulnerability (GHSA-6qv9-48xg-fc7f) in LangChain's prompt template system, which could allow attackers to access Python object internals through untrusted template strings.
  • Template Hardening: Specific fixes and defensive hardening measures have been implemented for F-string, Mustache, and Jinja2 templates to prevent attribute access and restrict capabilities when handling untrusted template strings, significantly reducing the attack surface.
  • Remediation and Best Practices: The update includes recommendations for users to audit their code for untrusted template sources, update to the patched version, and review template usage, emphasizing that Jinja2 templates should ideally be reserved for trusted sources only.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review: /gemini review — Performs a code review for the current pull request in its current state.
  • Pull Request Summary: /gemini summary — Provides a summary of the current pull request in its current state.
  • Comment: @gemini-code-assist — Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help: /gemini help — Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@kokoro-team kokoro-team removed the kokoro:force-run label Nov 20, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly updates the langchain-core dependency to version 0.3.80, addressing the critical template injection vulnerability GHSA-6qv9-48xg-fc7f. The update is applied consistently across all relevant requirements.txt files, which is a necessary and important security remediation.

While reviewing, I noticed significant duplication of dependencies across the various requirements.txt files. To improve long-term maintainability and simplify future updates, I've left a suggestion to consider centralizing common dependencies into a shared file. This is a recommendation for future improvement and doesn't block this important security update.

 google-auth==2.38.0
 anthropic[vertex]==0.28.0
-langchain-core==0.2.33
+langchain-core==0.3.80

Severity: medium

While this version bump is correct and necessary for security, I've observed that this dependency and several others are duplicated across numerous requirements.txt files within the generative_ai/ directory. This practice can make dependency management cumbersome and error-prone, as any update needs to be manually synchronized across all files.

To improve maintainability, I recommend consolidating common dependencies into a single, shared file (e.g., generative_ai/requirements.common.txt). Each specific requirements.txt file can then include these common dependencies using the -r requirements.common.txt directive. This would centralize version management and simplify future updates.

This change is outside the scope of the current security fix but would be a valuable improvement to address in a follow-up pull request.
